Pre-trained language models (PLMs) are known to improve the generalization performance of natural language understanding models by leveraging large amounts of data during the pre-training phase. However, the out-of-distribution (OOD) generalization problem remains a challenge in many NLP tasks, limiting the real-world deployment of these methods. This paper presents the first attempt at creating a unified benchmark, named GLUE-X, for evaluating OOD robustness in NLP models, highlighting the importance of OOD robustness and providing insights on how to measure the robustness of a model and how to improve it. The benchmark includes 13 publicly available datasets for OOD testing, and evaluations are conducted on 8 classic NLP tasks using 19 widely used PLMs. Our findings confirm the need for improved OOD accuracy in NLP tasks, as significant performance degradation was observed in all settings compared to in-distribution (ID) accuracy.
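The ID-versus-OOD measurement implied above reduces to reporting accuracy on held-out OOD test sets alongside the in-distribution accuracy and the resulting drop. A minimal sketch with invented function and dataset names (nothing here comes from GLUE-X's actual codebase):

```python
# Toy sketch of an ID-vs-OOD evaluation protocol: evaluate a model on its
# in-distribution test set, then on each OOD dataset, and report the
# per-dataset degradation. Names are placeholders, not GLUE-X identifiers.

def accuracy(preds, labels):
    """Fraction of predictions matching the gold labels."""
    return sum(p == l for p, l in zip(preds, labels)) / len(labels)

def ood_report(id_acc, ood_accs):
    """ood_accs maps an OOD dataset name to the model's accuracy on it;
    'drop' is the degradation relative to in-distribution accuracy."""
    return {name: {"acc": acc, "drop": id_acc - acc}
            for name, acc in ood_accs.items()}
```

In practice the interesting quantity is the average drop across OOD datasets, which is what makes degradation comparable across PLMs of different ID strength.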
Cybersecurity vulnerability information is often recorded through multiple channels, including government vulnerability repositories, individually maintained vulnerability-gathering platforms, and vulnerability-disclosure mailing lists and forums. Integrating vulnerability information from different channels enables comprehensive threat assessment and the rapid deployment of various security mechanisms. However, the limitations of today's entity alignment techniques hinder efforts to gather such information automatically. In our study, we annotate the first cybersecurity-domain entity alignment dataset and reveal the unique characteristics of security entities. Based on these observations, we propose the first cybersecurity entity alignment model, CEAM, which equips GNN-based entity alignment with two mechanisms: asymmetric masked aggregation and partitioned attention. Experimental results on the cybersecurity-domain entity alignment dataset demonstrate that CEAM significantly outperforms state-of-the-art entity alignment methods.
Data augmentation is an important component of the robustness evaluation of natural language processing (NLP) models, and of enhancing the diversity of the data they are trained on. In this paper, we present NL-Augmenter, a new participatory Python-based natural language augmentation framework that supports the creation of both transformations (modifications to the data) and filters (data splits according to specific features). We describe the framework and an initial set of 117 transformations and 23 filters for a variety of natural language tasks. We demonstrate the efficacy of NL-Augmenter by using several of its transformations to analyze the robustness of popular natural language models. The infrastructure, datacards, and robustness analysis results are publicly available on the NL-Augmenter repository (\url{https://github.com/gem-benchmark/nl-augmenter}).
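The transformation/filter split described above can be illustrated with a toy interface. The class names and methods below are invented for illustration and are not NL-Augmenter's actual API:

```python
# Illustrative sketch of the two extension points the framework describes:
# transformations (modify data) and filters (split data by a property).
# These classes are hypothetical, not NL-Augmenter's real interfaces.

class Transformation:
    """Modifies input text to produce augmented variants."""
    def generate(self, text: str) -> list[str]:
        raise NotImplementedError

class Filter:
    """Decides whether an example belongs to a data split."""
    def keep(self, text: str) -> bool:
        raise NotImplementedError

class SwapFirstChars(Transformation):
    """Trivial typo-style perturbation: swap the first two characters."""
    def generate(self, text: str) -> list[str]:
        if len(text) < 2:
            return [text]
        return [text[1] + text[0] + text[2:]]

class LengthFilter(Filter):
    """Keeps only examples with at most `max_words` words."""
    def __init__(self, max_words: int = 10):
        self.max_words = max_words

    def keep(self, text: str) -> bool:
        return len(text.split()) <= self.max_words
```

A real transformation would apply linguistically motivated perturbations (typos, paraphrases, negations); a filter partitions an evaluation set by a property of interest so that robustness can be measured per slice.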
Anomaly detection is a fundamental yet challenging problem in machine learning due to the lack of label information. In this work, we propose a novel and powerful framework, called SLA$^2$P, for unsupervised anomaly detection. After extracting representative embeddings from the raw data, we apply random projections to the features and regard features transformed by different projections as belonging to distinct pseudo-classes. We then train a classifier network on these transformed features to perform self-supervised learning. Next, we add adversarial perturbations to the transformed features to decrease the softmax scores of their predicted labels, and design anomaly scores based on the predictive uncertainty of the classifier on these perturbed features. Our motivation is that, because of the relatively small number and scattered patterns of anomalies, (1) the training of the pseudo-label classifier concentrates on learning the semantic information of normal data rather than of anomalous data, and (2) the transformed features of normal data are more robust to the perturbations than those of anomalies. Consequently, the perturbed transformed features of anomalies cannot be classified well and thus receive higher anomaly scores than normal samples. Extensive experiments on image, text, and inherently tabular benchmark datasets show that SLA$^2$P consistently achieves state-of-the-art results on unsupervised anomaly detection tasks.
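The scoring step described above can be sketched as follows. Assuming a classifier has already been trained on the pseudo-classes and the features have been adversarially perturbed, the score below aggregates the classifier's residual confidence on each feature's true pseudo-label. This is a simplified reading of the method, not the paper's code:

```python
import math

# Sketch of the SLA^2P scoring idea: a sample transformed by the k-th random
# projection has pseudo-label k; after perturbation, low classifier confidence
# on the true pseudo-label signals an anomaly.

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def anomaly_score(logits_per_transform):
    """logits_per_transform[k]: classifier logits for the k-th transformed
    (and perturbed) version of one sample. Averages the uncertainty
    (1 - softmax probability) assigned to each true pseudo-label."""
    score = 0.0
    for k, logits in enumerate(logits_per_transform):
        probs = softmax(logits)
        score += 1.0 - probs[k]
    return score / len(logits_per_transform)
```

On this reading, normal samples (classified confidently even after perturbation) receive scores near 0, while anomalies drift toward 1.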
Off-policy evaluation (OPE) evaluates a target policy using data generated by other policies. Most previous OPE methods focus on precisely estimating the true performance of a policy. We observe that in many applications, (1) the ultimate goal of OPE is to compare two or more candidate policies and choose a good one, which is much simpler than precisely evaluating their true performance; and (2) multiple policies have usually already been deployed to serve users in the real world, so the true performance of these policies can be known. Inspired by these two observations, in this work we study a new problem, supervised off-policy ranking (SOPR), which aims to rank a set of target policies based on supervised learning, leveraging off-policy data and policies with known performance. We propose a method to solve SOPR that learns a policy scoring model by minimizing a ranking loss over the training policies, rather than estimating precise policy performance. The scoring model in our method is a hierarchical Transformer-based model that maps a set of state-action pairs to a score, where the state of each pair comes from the off-policy data and the action is taken by the target policy at that state in an offline manner. Extensive experiments on public datasets show that our method outperforms baseline methods in terms of rank correlation, regret value, and stability. Our code is publicly available on GitHub.
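The core training idea above, fitting a scoring model with a ranking loss over policies of known performance rather than regressing to exact returns, can be sketched with a logistic pairwise loss. The function below is a generic pairwise ranking loss, not the paper's exact objective:

```python
import math

# Sketch of a pairwise ranking loss for policy scoring: for every pair of
# training policies where i is known to outperform j, penalize the model
# when score(i) does not exceed score(j). `scores` stand in for the outputs
# of a learned scoring model; `true_perf` for the known deployed returns.

def pairwise_ranking_loss(scores, true_perf):
    """Average logistic loss over all ordered pairs (i better than j)."""
    loss, pairs = 0.0, 0
    for i in range(len(scores)):
        for j in range(len(scores)):
            if true_perf[i] > true_perf[j]:
                margin = scores[i] - scores[j]
                loss += math.log(1.0 + math.exp(-margin))
                pairs += 1
    return loss / max(pairs, 1)
```

The appeal of this formulation is that the model only needs to get the ordering right, a strictly easier target than matching each policy's true return.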
The stochastic contextual bandit problem, which models the tradeoff between exploration and exploitation, has many real-world applications, including recommender systems, online advertising, and clinical trials. As with many other machine learning algorithms, contextual bandit algorithms often have one or more hyperparameters. For example, in most optimal stochastic contextual bandit algorithms, an unknown exploration parameter controls the tradeoff between exploration and exploitation. Proper hyperparameter selection is essential for contextual bandit algorithms to perform well. However, since there is no pre-collected dataset and decisions must be made in real time, offline tuning methods cannot be used to select hyperparameters in the contextual bandit setting. To address this problem, we first propose a two-layer bandit structure for automatically tuning the exploration parameter, and further generalize it to the Syndicated Bandits framework, which can dynamically learn multiple hyperparameters in the contextual bandit setting. We derive regret bounds for our proposed Syndicated Bandits framework and show that it avoids an exponential dependence on the number of hyperparameters to be tuned. Moreover, it achieves optimal regret bounds in certain scenarios. The Syndicated Bandits framework is general enough to handle the tuning tasks in many popular contextual bandit algorithms, such as LinUCB, LinTS, and UCB-GLM. Experiments on both synthetic and real datasets validate the effectiveness of our proposed framework.
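The two-layer structure described above can be caricatured as a top-layer bandit choosing which candidate hyperparameter value the bottom-layer contextual bandit uses in each round. The EXP3-style tuner below is a toy sketch under that reading; the candidate values and the reward interface are illustrative assumptions, not the paper's algorithm:

```python
import math
import random

# Toy sketch of the two-layer tuning idea: an adversarial-bandit tuner in
# the top layer samples one candidate exploration parameter per round; the
# observed reward of the bottom-layer contextual bandit then feeds back into
# an importance-weighted exponential update.

class Exp3Tuner:
    def __init__(self, candidates, gamma=0.1):
        self.candidates = candidates          # e.g. candidate alphas for LinUCB
        self.gamma = gamma                    # exploration floor of the tuner
        self.weights = [1.0] * len(candidates)

    def _probs(self):
        total = sum(self.weights)
        k = len(self.weights)
        return [(1 - self.gamma) * w / total + self.gamma / k
                for w in self.weights]

    def select(self):
        """Sample the index and value of this round's hyperparameter."""
        probs = self._probs()
        r, acc = random.random(), 0.0
        for i, p in enumerate(probs):
            acc += p
            if r <= acc:
                return i, self.candidates[i]
        return len(probs) - 1, self.candidates[-1]

    def update(self, idx, reward):
        """Importance-weighted exponential update for the chosen candidate."""
        probs = self._probs()
        est = reward / probs[idx]
        self.weights[idx] *= math.exp(self.gamma * est / len(self.candidates))
```

An adversarial-bandit tuner is a natural fit here because, from the tuner's perspective, the reward of each candidate value is non-stationary: it changes as the underlying contextual bandit learns.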
Many recent works have proposed methods for training classifiers with local robustness properties, which can provably eliminate classes of evasion attacks for most inputs, but not all inputs. Since data distribution shift is very common in security applications, e.g., it is often observed in malware detection, local robustness cannot guarantee robustness on unseen inputs when the classifier is deployed. It is therefore more desirable to enforce global robustness properties that hold for all inputs, which are strictly stronger than local robustness. In this paper, we present a framework and tools for training classifiers that satisfy global robustness properties. We define new notions of global robustness that are more suitable for security classifiers, and we design a novel booster-fixer training framework to enforce them. We structure our classifier as an ensemble of logic rules and design a new verifier to verify the properties. In our training algorithm, the booster increases the classifier's capacity, and the fixer enforces verified global robustness properties following counterexample-guided inductive synthesis. We show that we can train classifiers to satisfy different global robustness properties on three security datasets, and even multiple properties at the same time, with modest impact on the classifier's performance. For example, we train a Twitter spam account classifier to satisfy five global robustness properties, with a 5.4% decrease in true positive rate and a 0.1% increase in false positive rate, compared to a baseline XGBoost model that does not satisfy any property.
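The "ensemble of logic rules" classifier shape lends itself to property checking. Below is a toy sketch: a rule-based scorer plus a brute-force check of one invented monotonicity property (raising a suspiciousness feature should never flip a malicious verdict to benign). The rules, features, and checker are all illustrative; the paper's verifier is far more sophisticated than a grid sweep:

```python
# Toy sketch of a logic-rule ensemble classifier and a brute-force check of
# one hypothetical global robustness property. All rules and feature names
# are invented for illustration.

def classify(x, rules, threshold=1.0):
    """Each rule is a (predicate, weight) pair; the verdict is whether the
    summed weight of firing rules reaches the threshold (True = malicious)."""
    score = sum(w for pred, w in rules if pred(x))
    return score >= threshold

rules = [
    (lambda x: x["urls_per_tweet"] > 0.5, 0.6),
    (lambda x: x["account_age_days"] < 7, 0.6),
]

def check_monotone(rules, grid):
    """Global property (toy): over a grid of inputs, increasing
    urls_per_tweet must never turn a malicious verdict benign."""
    for age in grid:
        prev = False
        for u in sorted(grid):
            cur = classify({"urls_per_tweet": u, "account_age_days": age}, rules)
            if prev and not cur:
                return False          # found a monotonicity violation
            prev = cur
    return True
```

A grid sweep like this can only falsify a property on sampled points; enforcing it for all inputs, as the abstract describes, requires a symbolic verifier over the rule structure.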
Spoken language understanding (SLU) has been addressed as a supervised learning problem, where each domain has a set of training data. However, annotating data for every domain is economically costly and non-scalable, so we should fully utilize information across all domains. One existing approach solves the problem by conducting multi-domain learning, using shared parameters trained jointly across domains. We propose to improve the parameterization of this method by using domain-specific and task-specific model parameters to improve both knowledge learning and transfer. Experiments on 5 domains show that our model is more effective for multi-domain SLU and obtains the best results. In addition, it demonstrates transferability when adapted to new domains with little data, outperforming the prior best model by 12.4%.
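The parameterization idea above, combining shared, domain-specific, and task-specific parameters, can be sketched with a linear toy model; the names and shapes below are illustrative, not the paper's architecture:

```python
# Minimal sketch of shared + domain-specific + task-specific parameterization:
# the effective weight for each feature is the sum of the three components,
# so a new domain can reuse the shared part and only learn a small delta.

def score(features, shared_w, domain_w, task_w):
    """Linear toy model over one domain/task combination."""
    return sum(f * (shared_w[i] + domain_w[i] + task_w[i])
               for i, f in enumerate(features))
```

Under this decomposition, adapting to a low-resource domain amounts to fitting only `domain_w` while `shared_w` carries the cross-domain knowledge.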
Few-Shot Instance Segmentation (FSIS) requires models to detect and segment novel classes given only a few support examples. In this work, we explore a simple yet unified solution for FSIS as well as its incremental variants, and introduce a new framework named Reference Twice (RefT) to fully explore the relationship between support and query features based on a Transformer-like framework. Our key insights are twofold: first, with the aid of support masks, we can generate dynamic class centers more appropriately to re-weight query features. Second, we find that support object queries have already encoded key factors after base training. In this way, the query features can be enhanced twice from two aspects, i.e., the feature level and the instance level. In particular, we first design a mask-based dynamic weighting module to enhance support features, and then propose to link object queries for better calibration via cross-attention. After the above steps, performance on the novel classes can be improved significantly over our strong baseline. Additionally, our new framework can be easily extended to incremental FSIS with minor modification. When benchmarked on the COCO dataset under the FSIS, gFSIS, and iFSIS settings, our method achieves competitive performance compared to existing approaches across different shots; e.g., we boost nAP by a noticeable +8.2/+9.4 over the current state-of-the-art FSIS method in the 10/30-shot settings. We further demonstrate the superiority of our approach on few-shot object detection. Code and models will be made available.
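The mask-based dynamic class center idea can be sketched as masked average pooling over support features followed by similarity-based re-weighting of query features. Everything below operates on plain lists for illustration; the actual method works on Transformer feature maps:

```python
# Toy sketch of "dynamic class centers" from support masks: pool support
# features inside the object mask to get a class center, then re-weight
# query features by their similarity to that center. Function names and
# the unnormalized dot-product weighting are illustrative choices.

def class_center(support_feats, support_mask):
    """Masked average pooling: support_feats[i] is a feature vector,
    support_mask[i] is 1 inside the object and 0 outside."""
    dim = len(support_feats[0])
    total = [0.0] * dim
    count = 0
    for feat, m in zip(support_feats, support_mask):
        if m:
            for d in range(dim):
                total[d] += feat[d]
            count += 1
    return [t / max(count, 1) for t in total]

def reweight(query_feats, center):
    """Scale each query feature by its dot product with the class center,
    so center-aligned query locations are amplified."""
    out = []
    for feat in query_feats:
        w = sum(f * c for f, c in zip(feat, center))
        out.append([w * f for f in feat])
    return out
```

The mask matters because pooling over the whole support image would mix background statistics into the class center; restricting the average to the object region keeps the center class-specific.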
Benefiting from its capability to exploit intrinsic supervision information, contrastive learning has recently achieved promising performance in the field of deep graph clustering. However, we observe that two drawbacks of the positive and negative sample construction mechanisms prevent existing algorithms from improving further. 1) The quality of positive samples heavily depends on carefully designed data augmentations, while inappropriate data augmentations easily lead to semantic drift and indiscriminative positive samples. 2) The constructed negative samples are unreliable because they ignore important clustering information. To solve these problems, we propose a Cluster-guided Contrastive deep Graph Clustering network (CCGC) that mines the intrinsic supervision information in high-confidence clustering results. Specifically, instead of conducting complex node or edge perturbation, we construct two views of the graph by designing special Siamese encoders whose weights are not shared between the sibling sub-networks. Then, guided by the high-confidence clustering information, we carefully select and construct positive samples from the same high-confidence cluster in the two views. Moreover, to construct semantically meaningful negative sample pairs, we regard the centers of different high-confidence clusters as negative samples, thus improving the discriminative capability and reliability of the constructed sample pairs. Lastly, we design an objective function that pulls together samples from the same cluster while pushing away those from other clusters, by maximizing and minimizing the cross-view cosine similarity between positive and negative samples, respectively. Extensive experimental results on six datasets demonstrate the effectiveness of CCGC compared with existing state-of-the-art algorithms.
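The objective in the last step above can be sketched directly from its description: maximize the cross-view cosine similarity of a positive pair while minimizing similarity against the other clusters' centers. The vectors below are plain lists standing in for learned node embeddings, and the simple additive combination is an illustrative choice, not CCGC's exact loss:

```python
import math

# Sketch of a cluster-guided contrastive objective: pull the two views of a
# sample together (positive pair) and push its embedding away from the
# centers of the other high-confidence clusters (negative pairs).

def cosine(u, v):
    """Cosine similarity between two non-zero vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def cluster_contrastive_loss(z1, z2, neg_centers):
    """z1, z2: the two views' embeddings of one sample;
    neg_centers: centers of the other high-confidence clusters.
    Lower loss = views aligned and far from foreign cluster centers."""
    pos = cosine(z1, z2)
    neg = sum(cosine(z1, c) for c in neg_centers)
    return -pos + neg / max(len(neg_centers), 1)
```

Using cluster centers as negatives (rather than random nodes) avoids the failure mode the abstract points out: a randomly drawn negative may in fact belong to the same cluster as the anchor.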